The HealthTech Narrative Podcast
Ep 16 | Explainable AI - A Specific Use Case and General Principles w. Akash Parvatikar (HistoWiz)

Update: 2024-03-27
Description

Back in 2021, Akash Parvatikar co-authored a paper titled 'Prototypical models for classifying high-risk atypical breast lesions'. The premise was that some breast lesions are particularly challenging to classify, leading different pathologists to reach different diagnoses from the same information. The paper aimed to reduce this variability by providing a consistent computational method.

A key contribution of this work is that the AI model can explain its recommendations in a way that is useful for clinical practice. This is important because doctors need to understand why an AI system has made a particular diagnosis to trust and act on its recommendations.

In this episode, Akash discusses his work on explainable AI (xAI) in diagnosing breast cancer. He describes the model he developed, which outperformed baseline models in detecting high-risk breast lesions, emphasizes the importance of xAI in building trust and transparency in clinical decision-making tools, and explains how his model incorporates explainability through feature detection and contribution scores.

Akash also stresses that explanations need to be provided to end users frequently, as this plays a crucial role in building trust in AI systems. He highlights, too, the importance of involving domain experts in the development of AI models.

0:00 Intro
1:29 Achieving consistent diagnosis is crucial in patient care
4:45 Why does non-uniformity in diagnosis exist among pathologists?
9:40 What were the considerations for model preparation?
14:04 Components of the model and how they incorporate explainability
17:17 How explanation was incorporated - a simplified analogy 
22:52 How does xAI manifest itself for pathologists?
26:54 Should explainability be a feature across all AI models in healthcare?
29:29 So explainability in drug discovery is pointless?
31:00 Frequency of explainability
33:51 Domain experts need to be in the loop when building AI models
36:00 Outro 

You can reach out to Akash on LinkedIn at https://www.linkedin.com/in/akash007/


Divyam Tripathi